A PAC-Bayes Sample-compression Approach to Kernel Methods
Authors
Abstract
We propose a PAC-Bayes sample-compression approach to kernel methods that can accommodate any bounded similarity function. We show that the support vector machine (SVM) classifier is a particular case of a more general class of data-dependent classifiers known as majority votes of sample-compressed classifiers, and we provide novel risk bounds for these majority votes together with learning algorithms that minimize these bounds.
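To make the SVM-as-majority-vote connection concrete, here is a minimal sketch (not the paper's algorithm; the function names and the toy data are illustrative). Each "voter" is a training example from the compression set, casting the vote y_i · k(x, x_i); with k a kernel and the weights equal to the SVM dual coefficients, the weighted vote reduces to the familiar SVM decision rule sign(Σ_i α_i y_i k(x, x_i) + b).

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # A bounded similarity function (here the RBF kernel); the approach in the
    # abstract allows any bounded similarity, not only positive-definite kernels.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def majority_vote_predict(x, compression_set, labels, weights, b=0.0, k=rbf):
    """Weighted majority vote of sample-compressed 'voters'.

    Each voter is a stored training example (z_i, y_i) from the compression
    set; voter i contributes weights[i] * y_i * k(x, z_i) to the vote.
    With k a kernel and weights the SVM dual coefficients alpha_i, the sign
    of the aggregated vote is exactly the SVM decision rule.
    """
    score = sum(w * y * k(x, z)
                for z, y, w in zip(compression_set, labels, weights)) + b
    return 1 if score >= 0 else -1

# Toy usage: a two-point compression set separating two clusters.
cs = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
ys = [1, -1]
ws = [1.0, 1.0]
```

A point near (0, 0) is dominated by the positive voter and gets label +1; a point near (2, 2) gets label -1.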
Similar resources
PAC-Bayes Risk Bounds for Stochastic Averages and Majority Votes of Sample-Compressed Classifiers
We propose a PAC-Bayes theorem for the sample-compression setting where each classifier is described by a compression subset of the training data and a message string of additional information. This setting, which is the appropriate one to describe many learning algorithms, strictly generalizes the usual data-independent setting where classifiers are represented only by data-independent message...
Risk Bounds for Randomized Sample Compressed Classifiers
We derive risk bounds for randomized classifiers in the sample-compression setting, where the classifier specification uses two sources of information, viz. the compression set and the message string. By extending the recently proposed Occam's Hammer principle to data-dependent settings, we derive point-wise versions of the bounds on the stochastic sample-compressed classifiers and also r...
On the PAC-Bayes Bound Calculation based on Reproducing Kernel Hilbert Space
The PAC-Bayes risk bound, which combines Bayesian theory and structural risk minimization for stochastic classifiers, has been considered a framework for deriving some of the tightest generalization bounds. A major issue in calculating the bound is the unknown prior and posterior distributions of the concept space. In this paper, we formulate the concept space as a Reproducing Kernel Hilbert Space (RKHS...
PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model
We develop a PAC-Bayesian bound for the convergence rate of a Bayesian variant of Multiple Kernel Learning (MKL), an estimation method for the sparse additive model. Standard analyses of MKL require a strong condition on the design analogous to the restricted eigenvalue condition used in the analysis of the Lasso and the Dantzig selector. In this paper, we apply the PAC-Bayesian technique to show that ...
Publication date: 2011